Arm backend: Fix integer overflow in VGFBackend IO size computation #19256
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19256
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: There is 1 currently active SEV. If your PR is affected, please view it.
✅ No Failures: As of commit e0be5f8 with merge base 02f9866.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@claude review
This PR needs a
Claude finished @lucylq's task in 1m 6s

Code Review: Fix integer overflow in VGFBackend IO size computation
Overall: This is a solid security fix. The approach of using
Minor issues:
Looks good to merge. The only actionable item is the stale
Force-pushed from 37857e0 to 512c1e4 (Compare)
Pull request overview
This PR hardens VGFBackend’s IO buffer-size computation by adding explicit overflow checking and rejecting negative dimensions, preventing silent wraparound that could lead to incorrect memcpy sizes and potential out-of-bounds memory access when transferring tensor data to/from mapped Vulkan memory.
Changes:

- Replaced `std::accumulate(..., std::multiplies<>())` with an explicit loop using `c10::mul_overflows()` to detect multiplication overflow.
- Added validation to reject negative IO dimensions before casting to `size_t`.
- Updated includes to support safe numerics and `PRId64` formatting.
The include changes, adding `<c10/util/safe_numerics.h>` for `c10::mul_overflows()` and `<cinttypes>` for `PRId64`:

```cpp
#include <c10/util/safe_numerics.h>
#include <cinttypes>
#include <list>
#include <numeric>
```
```cpp
size_t io_size = io->elt_size;
for (int64_t dim : io->size) {
  ET_CHECK_OR_RETURN_ERROR(
      dim >= 0,
      InvalidArgument,
      "Negative dimension in IO size: %" PRId64,
      dim);
  ET_CHECK_OR_RETURN_ERROR(
      !c10::mul_overflows(io_size, static_cast<size_t>(dim), &io_size),
      InvalidArgument,
      "Overflow computing IO buffer size");
}
```
Replace `std::accumulate(..., std::multiplies<>())` with an explicit loop using `c10::mul_overflows()` to detect overflow before each multiplication. The previous code would silently wrap on overflow, producing an undersized memcpy size that could lead to out-of-bounds reads/writes when copying tensor data to/from Vulkan device memory. Also reject negative dimensions before casting to `size_t`. Addresses TOB-EXECUTORCH-27. This PR was authored with the assistance of Claude.
Force-pushed from 512c1e4 to e0be5f8 (Compare)
…ytorch#19256)

Replace `std::accumulate(..., std::multiplies<>())` with an explicit loop using `c10::mul_overflows()` to detect overflow before each multiplication. The previous code would silently wrap on overflow, producing an undersized memcpy size that could lead to out-of-bounds reads/writes when copying tensor data to/from Vulkan device memory.

Also reject negative dimensions before casting to `size_t`.

This PR was authored with the assistance of Claude.

cc @digantdesai @freddan80 @per @zingo @oscarandersson8218 @mansnils @Sebastian-Larsson @robell

Co-authored-by: Github Executorch <github_executorch@arm.com>